
Conversation

swolchok (Contributor)

In this case, broadcasting is not possible if I understand correctly.

NOTE TO REVIEWERS: I deleted a failing test because I think it's testing not-actually-existent-in-PyTorch functionality. Please let me know if I've made a mistake. I tried to exercise the behavior that this test implied existed like so:

```
>>> t = torch.tensor([1, 2, 3])
>>> t2 = torch.tensor(4)
>>> torch.abs(t2, out=t)
<stdin>:1: UserWarning: An output with one or more elements was resized since it had shape [3], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/Resize.cpp:38.)
tensor(4)
```

I think that if the test were correct, the result would have been torch.tensor([1, 2, 3]) with no message. Also, none of our operator tests seem to be failing. Have I missed anything?
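
For contrast, here is a minimal REPL sketch of my own (for illustration, not from the PR) of the matching-shape out= path, where no resize warning fires:

```
>>> import torch
>>> t = torch.tensor([1, -2, 3])
>>> out = torch.empty(3, dtype=t.dtype)  # same shape as the result, so no resize is needed
>>> torch.abs(t, out=out)
tensor([1, 2, 3])
```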

swolchok added 4 commits June 26, 2025 13:00
@swolchok requested a review from manuelcandales as a code owner June 26, 2025 21:04

pytorch-bot bot commented Jun 26, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12023

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit c11e1bd with merge base f673a4b:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

swolchok added a commit that referenced this pull request Jun 26, 2025
…ontiguous input

ghstack-source-id: ad2d09d
ghstack-comment-id: 3010027375
Pull-Request-resolved: #12023
@facebook-github-bot added the CLA Signed label Jun 26, 2025
swolchok added 5 commits June 26, 2025 14:52
swolchok added 4 commits June 26, 2025 23:28
@swolchok changed the base branch from gh/swolchok/479/head to gh/swolchok/484/head June 28, 2025 00:34
@swolchok added the release notes: none label June 28, 2025
@swolchok (Contributor, Author)

This is a size win: the portable-ops size test binary shrinks by 16,896 bytes unstripped, and the optimized-ops binary by 17,344 bytes. Size script results below; cases with no change edited out for brevity.

test/build_size_test.sh

before:

```
ExecuTorch with portable ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  1377360 Jun 27 17:24 cmake-out/test/size_test_all_ops
__TEXT	__DATA	__OBJC	others	dec	hex
1064960	65536	0	4295278592	4296409088	100160000
```

after:

```
ExecuTorch with portable ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  1360464 Jun 27 17:26 cmake-out/test/size_test_all_ops
__TEXT	__DATA	__OBJC	others	dec	hex
1048576	65536	0	4295278592	4296392704	10015c000
```

test/build_optimized_size_test.sh

before:

```
ExecuTorch with portable ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  1506384 Jun 27 17:17 cmake-out/test/size_test_all_ops
__TEXT	__DATA	__OBJC	others	dec	hex
1064960	65536	0	4295393280	4296523776	10017c000
ExecuTorch with optimized ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  4958792 Jun 27 17:17 cmake-out/test/size_test_all_optimized_ops
__TEXT	__DATA	__OBJC	others	dec	hex
3702784	65536	0	4296212480	4299980800	1004c8000
```

after:

```
ExecuTorch with portable ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  1505872 Jun 27 17:28 cmake-out/test/size_test_all_ops
__TEXT	__DATA	__OBJC	others	dec	hex
1064960	65536	0	4295393280	4296523776	10017c000
ExecuTorch with optimized ops binary size, unstripped:
-rwxr-xr-x  1 swolchok  staff  4941448 Jun 27 17:28 cmake-out/test/size_test_all_optimized_ops
__TEXT	__DATA	__OBJC	others	dec	hex
3686400	65536	0	4296212480	4299964416	1004c4000
```
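
For reference, the per-binary deltas, computed with a quick Python snippet of my own against the numbers above (not part of the PR):

```
>>> before = {"portable": 1377360, "optimized-build portable": 1506384, "optimized": 4958792}
>>> after  = {"portable": 1360464, "optimized-build portable": 1505872, "optimized": 4941448}
>>> {name: before[name] - after[name] for name in before}  # bytes saved, unstripped
{'portable': 16896, 'optimized-build portable': 512, 'optimized': 17344}
```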

@mergennachin force-pushed the gh/swolchok/484/head branch from a3f0bf7 to 665c8f0 June 28, 2025 04:39
Base automatically changed from gh/swolchok/484/head to main June 28, 2025 05:03
swolchok added a commit that referenced this pull request Jun 30, 2025
…ontiguous input

ghstack-source-id: 37448a6
ghstack-comment-id: 3010027375
Pull-Request-resolved: #12023
@swolchok merged commit 3ba0466 into main Jul 1, 2025
96 checks passed
@swolchok deleted the gh/swolchok/480/head branch July 1, 2025 21:00
BujSet pushed a commit to BujSet/executorch that referenced this pull request Jul 2, 2025
…ontiguous input (pytorch#12023)

Tanish2101 pushed a commit to Tanish2101/executorch that referenced this pull request Jul 9, 2025
…ontiguous input (pytorch#12023)
